# Computer Vision Model
aromanticasterisms · 7 months ago
Text
getting to the bottom of the new area and going oh 1. ajaw was telling the truth about what (he thinks) he was 2. so that's why he looks like that 3. did kinich go to ochkanatlan to meet him or was he set up somewhere else
#personal stuff#thorn plays genshin#I MEAN. I PRESUME??#otherwise it's just a coincidence that he's named Divine Rulership and mentioned by name. maybe he named himself that but c'mon#anyway head in hands oh my god. lore.#automatons modeled after dragons....yeah.... like the humans made automatons modeled after humans. wouldn't dragons do the same#cannot believe we just. killed them. no questions asked. they had 30 years to go we couldn't have like. asked them some questions first.#but anyway yeah presumably the land of seven flames was pretty big? not Just ochkanatlan. so ajaw Could have been elsewhere#were they in different places? or was ochkanatlan pretty much it. hm#anyway haha. what the fuck were those holy sovereign's notes huh#''she showed me all there was to know about the ancient empire:#''that ladder that climbed up to the firmament. those weapons converted from (...); those cannons that could tear (...) to pieces;#''those (...) that fell from the three moons; the research about (...) and wishes...''#HELLO? HELLOOO??#IS ANYBODY THERE.#[we knew most of this stuff already but hearing it CONFIRMED like this is making me insane]#the divine ladder [hinted at in the spiral abyss description] climbing up to the firmament [false sky]#those weapons [gnoses perhaps?] converted from [third descender's corpse if so]#are ''the cannons'' referring to the same thing? or does celestia have. oh fuck sentence canceled. the nails???#the research about something and wishes [visions]. but what was the other thing. hmm#ALSO WHAT FELL FROM THE THREE [destroyed] MOONS. WHAT DON'T WE KNOW. HELLO.#also i initially took her ''as a long lived species memory is a curse'' to mean like. mara. or erosion#which might be the case but also like. storage space. memory. on a computer...
11 notes · View notes
frank-olivier · 8 months ago
Text
Bayesian Active Exploration: A New Frontier in Artificial Intelligence
The field of artificial intelligence has seen tremendous growth in recent years, with various techniques and paradigms emerging to tackle complex problems in machine learning, computer vision, and natural language processing. Two concepts that have attracted particular attention are active inference and Bayesian mechanics. Although both have been researched separately, their synergy has the potential to revolutionize AI by creating more efficient, accurate, and effective systems.
Traditional machine learning algorithms take a passive approach: the system receives data and updates its parameters without actively influencing the data collection process. This approach has limitations, especially in complex and dynamic environments. Active inference, by contrast, allows AI systems to take an active role in selecting the most informative data points or actions. In this way, active inference lets systems adapt to changing environments, reducing the need for labeled data and improving the efficiency of learning and decision-making.
One of the first milestones in active inference was the development of the "query by committee" algorithm by Freund et al. in 1997. This algorithm used a committee of models to determine the most informative data points to label, laying the foundation for later active learning techniques. Another important milestone was the introduction of "uncertainty sampling" by Lewis and Gale in 1994, which selects the data points whose predictions are most uncertain or ambiguous.
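Uncertainty sampling is straightforward to sketch. The snippet below is an illustrative toy, not any particular library's API: `toy_predict_proba` stands in for a trained classifier, and the selector simply ranks unlabeled points by the entropy of their predicted class probabilities.

```python
import math

def entropy(probs):
    """Shannon entropy of a class-probability vector (in nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def uncertainty_sampling(unlabeled, predict_proba, k=1):
    """Return the k unlabeled points whose predictions are most uncertain."""
    return sorted(unlabeled, key=lambda x: entropy(predict_proba(x)), reverse=True)[:k]

# Hypothetical classifier: points near 0.5 are ambiguous.
def toy_predict_proba(x):
    p = min(max(x, 0.01), 0.99)
    return [p, 1 - p]

pool = [0.1, 0.48, 0.9, 0.52, 0.75]
print(uncertainty_sampling(pool, toy_predict_proba, k=2))  # → [0.48, 0.52]
```

The two points nearest the decision boundary are chosen for labeling; everything else is left in the pool.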
Bayesian mechanics, on the other hand, provides a probabilistic framework for reasoning and decision-making under uncertainty. By modeling complex systems using probability distributions, Bayesian mechanics enables AI systems to quantify uncertainty and ambiguity, thereby making more informed decisions when faced with incomplete or noisy data. Bayesian inference, the process of updating the prior distribution using new data, is a powerful tool for learning and decision-making.
One of the first milestones in Bayesian mechanics was the development of Bayes' theorem by Thomas Bayes in 1763. This theorem provided a mathematical framework for updating the probability of a hypothesis based on new evidence. Another important milestone was the introduction of Bayesian networks by Pearl in 1988, which provided a structured approach to modeling complex systems using probability distributions.
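Bayes' theorem itself takes only a few lines. A minimal numeric sketch with illustrative numbers (a rare condition with a 1% base rate, a test with 95% sensitivity and a 10% false-positive rate):

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = likelihood_h * prior
    evidence = numerator + likelihood_not_h * (1 - prior)
    return numerator / evidence

# Illustrative diagnostic-test numbers, not real data.
posterior = bayes_update(prior=0.01, likelihood_h=0.95, likelihood_not_h=0.10)
print(round(posterior, 3))  # → 0.088
```

Even with a positive test, the posterior stays below 9% because the 1% prior dominates — exactly the kind of uncertainty quantification the probabilistic framework provides.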
While active inference and Bayesian mechanics each have their strengths, combining them has the potential to create a new generation of AI systems that can actively collect informative data and update their probabilistic models to make more informed decisions. The combination of active inference and Bayesian mechanics has numerous applications in AI, including robotics, computer vision, and natural language processing. In robotics, for example, active inference can be used to actively explore the environment, collect more informative data, and improve navigation and decision-making. In computer vision, active inference can be used to actively select the most informative images or viewpoints, improving object recognition or scene understanding.
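One way the two ideas combine in an exploration loop: maintain a Bayesian posterior over each option's unknown success rate and actively probe the option whose posterior is most uncertain. The sketch below uses Beta-Bernoulli arms purely for illustration; the class and function names are invented, not from any cited system.

```python
import random

class BetaArm:
    """Bernoulli option with a Beta(a, b) posterior over its success rate."""
    def __init__(self):
        self.a, self.b = 1.0, 1.0  # uniform prior

    def update(self, outcome):
        if outcome:
            self.a += 1
        else:
            self.b += 1

    def variance(self):
        # Variance of a Beta(a, b) distribution.
        n = self.a + self.b
        return (self.a * self.b) / (n * n * (n + 1))

def most_informative(arms):
    """Active exploration: probe the arm we are most uncertain about."""
    return max(range(len(arms)), key=lambda i: arms[i].variance())

random.seed(0)
true_rates = [0.2, 0.8, 0.5]
arms = [BetaArm() for _ in true_rates]
for _ in range(200):
    i = most_informative(arms)
    arms[i].update(random.random() < true_rates[i])
```

After 200 actively chosen probes, every arm's posterior variance has shrunk well below the uniform prior's, because the loop keeps steering data collection toward whichever estimate is weakest.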
Timeline:
1763: Bayes' theorem
1988: Bayesian networks
1994: Uncertainty Sampling
1997: Query by Committee algorithm
2017: Deep Bayesian Active Learning
2019: Bayesian Active Exploration
2020: Active Bayesian Inference for Deep Learning
2020: Bayesian Active Learning for Computer Vision
The synergy of active inference and Bayesian mechanics is expected to play a crucial role in shaping the next generation of AI systems. Some possible future developments in this area include:
- Combining active inference and Bayesian mechanics with other AI techniques, such as reinforcement learning and transfer learning, to create more powerful and flexible AI systems.
- Applying the synergy of active inference and Bayesian mechanics to new areas, such as healthcare, finance, and education, to improve decision-making and outcomes.
- Developing new algorithms and techniques that integrate active inference and Bayesian mechanics, such as Bayesian active learning for deep learning and Bayesian active exploration for robotics.
Dr. Sanjeev Namjosh: The Hidden Math Behind All Living Systems - On Active Inference, the Free Energy Principle, and Bayesian Mechanics (Machine Learning Street Talk, October 2024)
Saturday, October 26, 2024
6 notes · View notes
sandimexicola · 1 year ago
Text
Crystal vision.
6 notes · View notes
mitchell-jake · 19 days ago
Text
7 Key Features to Include in CV Powered Parental App Development
Keeping kids safe online isn't easy. With new content and platforms emerging daily, parents need help. CV powered parental app development fills that gap with smart features built to monitor, protect, and support children in their digital lives. Here's a closer look at the core functionalities that make these apps truly powerful.
1. Real-Time AI Filtering of Visual Content
Kids see thousands of images and videos online every day. Not all of it is appropriate. Real-time AI filtering scans this content instantly, using computer vision to block nudity, violence, or other harmful visuals.
Filters content across browsers, apps, and social media.
Works instantly—no delay between detection and action.
Lets kids use platforms safely instead of blocking them completely.
This feature builds trust. Kids aren’t totally restricted, and parents feel secure knowing dangerous visuals won’t slip through.
2. Screen Time Management and Customizable Limits
Too much screen time can harm sleep, focus, and family time. CV powered parental app development includes tools that let parents set healthy usage boundaries.
Daily time limits for each app or device.
Scheduled downtimes during school, meals, or bedtime.
Alerts and summaries help parents track usage.
These tools promote balance without the need for constant arguments. Parents regain control without nagging. Kids learn responsibility.
3. Advanced S*xting Prevention
S*xting has become a serious issue among teens. AI and CV technology can detect suggestive or explicit content before it's sent.
Scans images stored or captured on the device.
Flags risky content and alerts parents discreetly.
Locks or hides questionable images until reviewed.
This feature not only prevents risky behavior but also opens doors for important conversations between parents and kids.
4. Social Media Monitoring for Child Safety
Many online risks come through social platforms. From cyberbullying to unwanted messages, social media is full of unseen dangers.
Monitors texts, images, and interactions on platforms like Instagram, TikTok, and Snapchat.
Alerts parents to dangerous messages or patterns.
Offers insights without reading every conversation.
Kids get to enjoy social apps. Parents gain peace of mind.
5. Seamless Device and App Management
Managing devices across a family can get messy. A solid CV powered parental app offers centralized control.
Manage multiple devices from one dashboard.
Allow or block specific apps with a tap.
Create profiles for each child with age-specific settings.
Everything stays synced and simple. No need to set rules on each device manually.
6. GPS Tracking and Geo-Fencing Capabilities
Online safety and physical safety go hand in hand. GPS tracking ensures that parents know where their kids are at all times.
Live location updates for all connected devices.
Set safe zones like school, home, or parks.
Get notified when kids enter or leave these zones.
This builds an extra layer of trust and security for both parents and children.
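The safe-zone check above reduces to a distance test against GPS coordinates. A minimal sketch using the haversine formula; the zone coordinates and radius are hypothetical examples, not app defaults:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS coordinates."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_safe_zone(position, zone_center, radius_m):
    """True if a device position lies inside a circular geo-fence."""
    return haversine_m(*position, *zone_center) <= radius_m

school = (40.7580, -73.9855)  # hypothetical zone centre
print(in_safe_zone((40.7582, -73.9851), school, radius_m=150))  # True
print(in_safe_zone((40.7680, -73.9855), school, radius_m=150))  # False
```

An entry/exit notification is then just an edge trigger: alert when the result of this check changes between consecutive location updates.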
7. Tamper-Proof Security Features
Tech-savvy kids might try to bypass controls. These features make sure the rules stay in place.
Blocks attempts to uninstall or disable the app.
Alerts parents of suspicious device behavior.
Ensures protection remains active at all times.
With this, parents don't have to worry about kids finding workarounds.
Importance of AI in These Features
Computer vision and AI drive every feature mentioned above. Without them, the app wouldn’t be smart enough to adapt to fast-changing threats.
Leveraging AI to Detect Inappropriate Content
AI models trained on massive datasets learn to spot the signs of explicit content. They don't just flag obvious things. They detect context, background elements, and more.
Detects pornography, nudity, and violence in visual content.
Understands evolving trends and new forms of inappropriate content.
Continues learning from feedback and corrections.
This level of detection is not possible through simple keyword or link blocking.
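At the application layer, such a detector typically reduces to a classifier score routed through thresholds. The sketch below is purely illustrative: `fake_model` stands in for a trained computer-vision model, and the threshold values are placeholders, not recommendations.

```python
def moderate(image, model, block_threshold=0.8, review_threshold=0.5):
    """Route an image based on a model's estimated probability of harm.

    `model` is a hypothetical classifier returning P(harmful) in [0, 1];
    the two thresholds are illustrative, not recommended values.
    """
    score = model(image)
    if score >= block_threshold:
        return "block"
    if score >= review_threshold:
        return "flag_for_parent"
    return "allow"

# Stand-in for real model inference: a lookup of pre-computed scores.
fake_model = {"cat.jpg": 0.02, "meme.png": 0.60, "explicit.png": 0.97}.get
results = [moderate(f, fake_model) for f in ["cat.jpg", "meme.png", "explicit.png"]]
print(results)  # → ['allow', 'flag_for_parent', 'block']
```

The middle band is what keyword blocking cannot give you: ambiguous content gets surfaced for a human decision instead of being silently allowed or blocked.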
How AI Enhances Real-Time Filtering
Parents need fast, seamless protection—not laggy apps. With AI, scanning happens instantly.
Content is analyzed as it loads or appears.
Harmful material is blocked before it’s visible.
No need to wait for reports or summaries.
That’s a huge relief for parents who want to let kids explore safely.
Ensuring Accuracy with AI-Powered Monitoring
Accuracy matters. Too many false alarms create frustration. Too few put kids at risk.
AI reduces false positives and negatives over time.
Trained on diverse, real-world data to ensure balance.
Refined continuously through usage.
Apps built with strong AI foundations provide reliable, consistent protection that adapts.
Building a CV Powered Parental App for the Future
Keeping up with tech is tough. Parents need tools that don’t become outdated a year later. Future-proofing should be part of your CV powered parental app development strategy.
Scalability and Customization Options
As families grow and kids get older, needs change. So should your app.
Add more devices or family members easily.
Adjust content filters based on age or maturity.
Scale up without needing to switch platforms.
This keeps the app useful for years.
Continuous Model Updates for Better Accuracy
Threats change constantly. What’s harmful today might look different tomorrow.
Frequent updates to AI models ensure up-to-date detection.
Feedback loops improve performance over time.
Keeps the app relevant as digital behavior evolves.
Parents stay ahead of online risks—not behind them.
User-Friendly Interfaces for Parents and Kids
A great app is only as good as its usability. If it’s hard to use, it won’t work.
Clean dashboards for parents to manage settings.
Simple alerts and insights, no tech jargon.
Child-friendly interfaces that respect their privacy.
This helps families adopt the app quickly and use it effectively.
Conclusion
AI is the secret weapon in CV powered parental app development. It empowers parents with tools that work in real-time, adapt to change, and protect without over-restricting. Kids can enjoy the internet safely while parents gain peace of mind.
The Power of CV Technology in Online Safety
Computer vision adds a much-needed dimension to digital parenting. It understands images and video—not just text. That’s essential in a world full of visual content. When combined with AI, it creates a safety net that’s both smart and scalable.
If you're considering building your own CV powered parental control app, look for a development partner who knows what they're doing. Firms like Idea Usher have already built AI-powered solutions that keep families safe while balancing freedom and protection. Their experience can help you launch a secure, reliable app that genuinely makes a difference.
Because at the end of the day, safety isn’t just about blocking content; it’s about building a CV powered parental control app that grows with your child’s digital world.
0 notes
insteptechnologies123 · 1 month ago
Text
0 notes
fuerst-von-plan1 · 7 months ago
Text
Optimizing Damage Assessment with Machine Learning
Damage assessment is a central component of insurance, claims settlement, and general risk evaluation. Traditional damage-assessment methods are often time-consuming and error-prone, which can lead to delays and uncertainty. With the growing adoption of machine learning (ML) technologies, new…
0 notes
projectchampionz · 10 months ago
Text
Explore These Exciting DSU Micro Project Ideas
Explore These Exciting DSU Micro Project Ideas Are you a student looking for an interesting micro project to work on? Developing small, self-contained projects is a great way to build your skills and showcase your abilities. At the Distributed Systems University (DSU), we offer a wide range of micro project topics that cover a variety of domains. In this blog post, we’ll explore some exciting DSU…
0 notes
jupiterswasphouse · 13 days ago
Text
Not certain if this has already been posted about here, but iNaturalist recently uploaded a blog post stating that they had received a grant from Google to incorporate new forms of generative AI into their 'computer vision' model.
I'm sure I don't need to tell most of you why this is a horrible idea, that does away with much of the trust gained by the thus far great service that is iNaturalist. But, to elaborate on my point, to collaborate with Google on tools such as these is a slap in the face to much of the userbase, including a multitude of biological experts and conservationists across the globe.
They claim that they will work hard to make sure that the identification information provided by the AI tools is of the highest quality, which I do not entirely doubt from this team. I would hope that there is a thorough vetting process in place for this information (though, if you need people to vet the information, what's the point of the generative AI over a simple wiki of identification criteria?). Nonetheless, if you've seen Google's (or any other tech company's) work in this field in the past, which you likely have, you will know that these tools are not ready to explain the nuances of species identification, as they continue to provide heavy amounts of complete misinformation on a daily basis. Users may be able to provide feedback, but should a casual user look to the AI for an explanation, many would not realize if what they are being told is wrong.
Furthermore, while the data is not entirely my concern, as the service has been using our data for years to train its 'computer vision' model into what it is today, and they claim to have ways to credit people in place, it does make it quite concerning that Google is involved in this deal. I can't say for certain that they will do anything more with the data given, but Google has proven time and again to be highly untrustworthy as a company.
Though, that is something I'm less concerned by than I am by the fact that a non-profit so dedicated to the biodiversity of the earth and the naturalists on it would even dare lock in a deal of this nature. Not only is it making a deal to create yet another shoehorned misinformation machine, one proven to use more unclean energy and water (among other things) than each unsatisfactory, untrustworthy answer is worth, but it is doing so with one of the greediest companies on the face of the earth, a beacon of smog shining in colors antithetical to the iNaturalist mission statement. It's a disgrace.
In conclusion, I want to believe in the good of iNaturalist. The point stands, though, that to do this is a step in the worst possible direction. Especially when they, for all intents and purposes, already had a system that works! With their 'computer vision' model providing basic suggestions (if not always accurate in and of itself), and user suggested IDs providing further details and corrections where needed.
If you're an iNaturalist user who stands in opposition to this decision, leave a comment on this blog post, and maybe we can get this overturned.
[Note: Yes, I am aware there is good AI used in science, this is generative AI, which is a different thing entirely. Also, if you come onto this post with strawmen or irrelevant edge-cases I will wring your neck.]
2K notes · View notes
techtrends · 2 years ago
Text
How Computer Vision in AIoT is Reshaping Industries
Understanding Computer Vision in AIoT
At its core, Computer Vision enables machines to process and decipher visual information extracted from the real world. In the context of AIoT, this entails the integration of AI algorithms and deep learning models into IoT devices armed with cameras and sensors. This amalgamation significantly enhances the cognitive abilities of these devices, granting them the power to automate tasks that once demanded human expertise.
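As a toy illustration of on-device visual processing, the sketch below detects motion by frame differencing, a lightweight classic that fits constrained IoT hardware. Real AIoT deployments would run trained deep-learning models; the "frames" here are plain nested lists of greyscale intensities standing in for camera input.

```python
def frame_diff_ratio(prev, curr, threshold=30):
    """Fraction of pixels whose intensity changed by more than `threshold`.

    Frames are greyscale images given as nested lists of 0-255 ints,
    a stand-in for what a camera-equipped IoT device would capture.
    """
    changed = total = 0
    for row_p, row_c in zip(prev, curr):
        for p, c in zip(row_p, row_c):
            total += 1
            if abs(p - c) > threshold:
                changed += 1
    return changed / total

def motion_detected(prev, curr, min_ratio=0.1):
    """Trigger when at least `min_ratio` of the frame changed."""
    return frame_diff_ratio(prev, curr) >= min_ratio

static = [[10] * 4 for _ in range(4)]
moved = [row[:] for row in static]
moved[0] = [200, 200, 200, 200]        # an object enters the top of frame
print(motion_detected(static, moved))  # → True (4/16 = 25% of pixels changed)
```

In a real pipeline this cheap trigger would gate the expensive step: only frames that pass it are sent to the AI model for recognition, saving power and bandwidth on the edge device.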
Applications in Diverse Industries
Healthcare:
The realm of healthcare is undergoing a monumental transformation through Computer Vision in AIoT. Cutting-edge medical imaging, early detection of diseases, and personalized treatment plans are all being revolutionized. Cameras embedded within IoT devices can scrutinize facial expressions and vital signs, thus evaluating patient health and emotional well-being. This breakthrough leads to elevated healthcare delivery and improved outcomes.
Manufacturing:
The manufacturing sector is embracing a paradigm shift powered by AIoT-imbued Computer Vision. Cameras seamlessly integrated into assembly lines are adept at identifying defects, ensuring product quality, and monitoring worker safety. The outcome is heightened efficiency and reduced production costs.
Retail:
In the retail sector, the fusion of Computer Vision service and AIoT is elevating customer experiences and optimizing operations. Smart cameras and sensors adeptly track customer movements, analyze purchasing patterns, and manage inventory levels. This synergy translates into personalized recommendations and proactive restocking strategies.
Agriculture:
The realm of agriculture is undergoing a metamorphosis courtesy of AIoT-driven Computer Vision. Drones equipped with cameras take charge of precision farming by monitoring crops, gauging soil health, and detecting pest intrusions. This data-driven approach optimizes yield and resource allocation.
Smart Cities:
Within urban landscapes, AIoT-powered Computer Vision is a catalyst for enhanced safety and efficiency. Intelligent cameras monitor traffic flow, identify license plates, and swiftly detect accidents. This contributes to superior traffic management and bolstered public safety.
0 notes
drcpanda12 · 2 years ago
Photo
New Post has been published on https://www.knewtoday.net/the-rise-of-openai-advancing-artificial-intelligence-for-the-benefit-of-humanity/
The Rise of OpenAI: Advancing Artificial Intelligence for the Benefit of Humanity
OpenAI is a research organization that is focused on advancing artificial intelligence in a safe and beneficial manner. It was founded in 2015 by a group of technology luminaries, including Elon Musk, Sam Altman, Greg Brockman, and others, with the goal of creating AI that benefits humanity as a whole.
OpenAI conducts research in a wide range of areas related to AI, including natural language processing, computer vision, robotics, and more. It also develops cutting-edge AI technologies and tools, such as the GPT series of language models, which have been used in a variety of applications, from generating realistic text to aiding in scientific research.
In addition to its research and development work, OpenAI is also committed to promoting transparency and safety in AI. It has published numerous papers on AI ethics and governance and has advocated for responsible AI development practices within the industry and among policymakers.
Introduction to OpenAI: A Brief History and Overview
OpenAI is an American artificial intelligence (AI) research laboratory founded as a non-profit organization, with OpenAI Limited Partnership as its for-profit subsidiary. OpenAI's stated goal is to advance and develop friendly AI. OpenAI systems run on Microsoft's Azure supercomputing platform.
Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba created OpenAI in 2015; the inaugural board of directors included Sam Altman and Elon Musk. Microsoft invested $1 billion in OpenAI LP in 2019 and another $10 billion in 2023.
Brockman compiled a list of the "top researchers in the field" after meeting Yoshua Bengio, one of the "founding fathers" of the deep learning movement, and in December 2015 brought nine of them on as the company's first employees. In 2016, OpenAI paid its AI researchers corporate-level compensation rather than typical nonprofit salaries, though not salaries on par with Facebook or Google.
Several researchers joined the company because of OpenAI’s potential and mission; one Google employee claimed he was willing to leave the company “partly because of the very strong group of people and, to a very big extent, because of its mission.” Brockman said that advancing humankind’s ability to create actual AI in a secure manner was “the best thing I could imagine doing.” Wojciech Zaremba, a co-founder of OpenAI, claimed that he rejected “borderline ridiculous” offers of two to three times his market value in order to join OpenAI.
A public beta of “OpenAI Gym,” a platform for reinforcement learning research, was made available by OpenAI in April 2016. “Universe,” a software platform for assessing and honing an AI’s general intelligence throughout the universe of games, websites, and other applications, was made available by OpenAI in December 2016.
OpenAI’s Research Areas: Natural Language Processing, Computer Vision, Robotics, and More
In 2021, much of OpenAI's research concentrated on reinforcement learning (RL).
Gym
Gym, introduced in 2016, aims to provide an easily deployable general-intelligence benchmark across a wide range of environments, similar to, but broader than, the ImageNet Large Scale Visual Recognition Challenge used in supervised learning research. To make published research more easily replicable, it seeks to standardize how environments are described in AI publications, and the project offers a user-friendly interface. As of June 2017, Gym could only be used with Python; as of September 2017, the Gym documentation site was no longer maintained, with ongoing activity continuing on its GitHub page.
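Gym's core abstraction is a `reset()`/`step()` loop in which `step(action)` returns `(observation, reward, done, info)`. A minimal toy environment implementing that interface, without the gym library itself (the coin-flip environment is invented for illustration):

```python
import random

class CoinFlipEnv:
    """Toy environment following the classic Gym interface:
    reset() -> observation; step(action) -> (obs, reward, done, info)."""

    def __init__(self, episode_length=10):
        self.episode_length = episode_length

    def reset(self):
        self.t = 0
        self.state = random.choice([0, 1])
        return self.state

    def step(self, action):
        # Reward the agent for matching the current state, then re-flip it.
        reward = 1.0 if action == self.state else 0.0
        self.t += 1
        self.state = random.choice([0, 1])
        done = self.t >= self.episode_length
        return self.state, reward, done, {}

env = CoinFlipEnv()
obs, total, done = env.reset(), 0.0, False
while not done:
    obs, reward, done, _ = env.step(obs)  # policy: echo the observation
    total += reward
print(total)  # → 10.0 (the echo policy matches the state every step)
```

Because the agent interacts only through this loop, any algorithm written against the interface can be dropped into any environment that implements it, which is exactly the standardization Gym was built to provide.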
RoboSumo
In RoboSumo, a virtual-reality game released in 2017, humanoid meta-learning robot agents compete against one another with the aim of learning to move and to shove the rival agent out of the arena. Through this adversarial learning process the agents learn to balance in a generic fashion: when an agent is taken out of this virtual environment and placed in a different one with strong winds, it braces to stay upright. Igor Mordatch of OpenAI contends that competition between agents can lead to an intelligence "arms race", improving an agent's capacity to perform even outside the confines of the competition.
Video game bots
OpenAI Five is a team of five OpenAI-curated bots that play the competitive five-on-five video game Dota 2, trained to compete against human players at a high level entirely through trial and error. The first public demonstration took place at The International 2017, the game's annual premier championship, where Dendi, a professional Ukrainian player, lost a live one-on-one match to a bot; the system was later expanded into a full team of five. After the match, CTO Greg Brockman explained that the bot had learned by playing against itself for two weeks of real time, and that the learning software was a step toward creating software capable of handling complex tasks like a surgeon.
By June 2018, the bots had improved to the point where they could play together as a full team of five, defeating teams of amateur and semi-professional players. OpenAI Five competed in two exhibition matches against top players at The International 2018, losing both. In a live demonstration match in San Francisco in April 2019, OpenAI Five beat OG, the reigning world champions, 2:0. The bots made their final public appearance later that month, winning 99.4% of the 42,729 games they played in a four-day open online competition.
Dactyl
Introduced in 2018, Dactyl uses machine learning to train a Shadow Hand, a robotic hand that resembles a human hand, to manipulate physical objects. It learns entirely in simulation, using the same RL algorithms and training code as OpenAI Five. OpenAI tackled the object-orientation problem with domain randomization, a simulation technique that exposes the learner to a variety of experiences rather than trying to match simulation exactly to reality. Dactyl's setup includes RGB cameras in addition to motion-tracking cameras, so the robot can manipulate an arbitrary object simply by seeing it. In 2018, OpenAI demonstrated that the system could manipulate a cube and an octagonal prism.
In 2019, OpenAI demonstrated Dactyl solving a Rubik's Cube, succeeding 60% of the time. Objects like the Rubik's Cube introduce complex physics that are harder to model; OpenAI addressed this by increasing Dactyl's resistance to perturbations, using a simulation technique known as Automated Domain Randomization (ADR).
OpenAI’s GPT model
Alec Radford and his colleagues wrote the initial study on generative pre-training of a transformer-based language model, which was released as a preprint on OpenAI’s website on June 11, 2018. It demonstrated how pre-training on a heterogeneous corpus with lengthy stretches of continuous text allows a generative model of language to gain world knowledge and understand long-range dependencies.
Generative Pre-trained Transformer 2 ("GPT-2") is an unsupervised transformer language model and the successor to OpenAI's first GPT model. When GPT-2 was announced in February 2019, only a limited number of demonstrative versions were initially released to the public. The full release was delayed over concerns about potential misuse, including applications for generating fake news, though some analysts questioned whether GPT-2 posed a serious threat.
It was trained on the WebText corpus, just over 8 million documents comprising 40 gigabytes of text from links shared in Reddit submissions that received at least three upvotes. Adopting byte pair encoding avoids some problems that arise when encoding vocabulary with word tokens: any string of characters can be represented by encoding both single characters and multi-character tokens.
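Byte pair encoding is simple to sketch: repeatedly merge the most frequent adjacent pair of tokens into one. A minimal, illustrative version (GPT-2's actual tokenizer operates on bytes and has many more details):

```python
from collections import Counter

def most_frequent_pair(tokens):
    """The adjacent token pair that occurs most often."""
    return Counter(zip(tokens, tokens[1:])).most_common(1)[0][0]

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with a single merged token."""
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

def bpe(text, num_merges):
    """Start from single characters and apply `num_merges` greedy merges."""
    tokens = list(text)
    for _ in range(num_merges):
        tokens = merge_pair(tokens, most_frequent_pair(tokens))
    return tokens

print(bpe("low lower lowest", 2))  # the shared stem "low" becomes one token
```

Frequent substrings collapse into single tokens while rare strings remain spellable character by character, which is why the scheme can encode any input without an out-of-vocabulary case.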
GPT-3
Benchmark results for GPT-3 were significantly better than for GPT-2. OpenAI cautioned that such scaling-up of language models might be approaching or running into the fundamental capability limits of predictive language models.
Many thousand petaflop/s-days of computing were needed for pre-training GPT-3 as opposed to tens of petaflop/s-days for the complete GPT-2 model. Similar to its predecessor, GPT-3’s fully trained model wasn’t immediately made available to the public due to the possibility of abuse, but OpenAI intended to do so following a two-month free private beta that started in June 2020. Access would then be made possible through a paid cloud API.
GPT-4
OpenAI announced Generative Pre-trained Transformer 4 (GPT-4), which accepts text or image inputs, on March 14, 2023. OpenAI said the new model passed a simulated law school bar exam with a score in the top 10% of test takers, whereas its predecessor, GPT-3.5, scored in the bottom 10%. GPT-4 can also write code in all major programming languages and read, analyze, or produce up to 25,000 words of text.
DALL-E and CLIP images
DALL-E, a Transformer prototype that was unveiled in 2021, generates visuals from textual descriptions. CLIP, which was also made public in 2021, produces a description for an image.
DALL-E interprets natural-language prompts (such as "an astronaut riding a horse") and produces corresponding images using a 12-billion-parameter version of GPT-3. It can produce pictures of both real and imaginary objects.
ChatGPT and ChatGPT Plus
ChatGPT, an artificial intelligence product introduced in November 2022 and built on GPT-3, offers a conversational interface that lets users ask questions in everyday language and receive an answer within seconds. Five days after its debut, ChatGPT had one million users.
ChatGPT Plus is a $20/month subscription service that enables users early access to new features, faster response times, and access to ChatGPT during peak hours.
Ethics and Safety in AI: OpenAI’s Commitment to Responsible AI Development
As artificial intelligence (AI) continues to advance and become more integrated into our daily lives, concerns around its ethics and safety have become increasingly urgent. OpenAI, a research organization focused on advancing AI in a safe and beneficial manner, has made a commitment to responsible AI development that prioritizes transparency, accountability, and ethical considerations.
One of the ways that OpenAI has demonstrated its commitment to ethical AI development is through the publication of numerous papers on AI ethics and governance. These papers explore a range of topics, from the potential impact of AI on society to the ethical implications of developing powerful AI systems. By engaging in these discussions and contributing to the broader AI ethics community, OpenAI is helping to shape the conversation around responsible AI development.
Another way that OpenAI is promoting responsible AI development is through its focus on transparency. The organization has made a point of sharing its research findings, tools, and technologies with the wider AI community, making it easier for researchers and developers to build on OpenAI’s work and improve the overall quality of AI development.
In addition to promoting transparency, OpenAI is also committed to safety in AI. The organization recognizes the potential risks associated with developing powerful AI systems and has taken steps to mitigate these risks. For example, OpenAI has developed a framework for measuring AI safety, which includes factors like robustness, alignment, and transparency. By considering these factors throughout the development process, OpenAI is working to create AI systems that are both powerful and safe.
OpenAI has also taken steps to ensure that its own development practices are ethical and responsible. The organization has established an Ethics and Governance board, made up of external experts in AI ethics and policy, to provide guidance on OpenAI’s research and development activities. This board helps to ensure that OpenAI’s work is aligned with its broader ethical and societal goals.
Overall, OpenAI’s commitment to responsible AI development is an important step forward in the development of AI that benefits humanity as a whole. By prioritizing ethics and safety, and by engaging in open and transparent research practices, OpenAI is helping to shape the future of AI in a positive and responsible way.
Conclusion: OpenAI’s Role in Shaping the Future of AI
OpenAI’s commitment to advancing AI in a safe and beneficial manner is helping to shape the future of AI. The organization’s focus on ethical considerations, transparency, and safety in AI development is setting a positive example for the broader AI community.
OpenAI’s research and development work is also contributing to the development of cutting-edge AI technologies and tools. The GPT series of language models, developed by OpenAI, has been used in a variety of applications, from generating realistic text to aiding in scientific research. These advancements have the potential to revolutionize the way we work, communicate, and learn.
In addition, OpenAI’s collaborations with industry leaders and their impact on real-world applications demonstrate the potential of AI to make a positive difference in society. By developing AI systems that are safe, ethical, and transparent, OpenAI is helping to ensure that the benefits of AI are shared by all.
As AI continues to evolve and become more integrated into our daily lives, the importance of responsible AI development cannot be overstated. OpenAI’s commitment to ethical considerations, transparency, and safety is an important step forward in creating AI that benefits humanity as a whole. By continuing to lead the way in responsible AI development, OpenAI is helping to shape the future of AI in a positive and meaningful way.
Best Text to Speech AI Voices
1 note · View note
mbari-blog · 1 year ago
Text
Tumblr media
We’ve got our eyes on you 👀
Using 3D imaging and computational modeling, researchers from The University of Western Australia, the UWA Oceans Institute, the Smithsonian’s National Museum of Natural History, and MBARI compared the structure and function of the eyes of hyperiid amphipods, shrimp-like crustaceans that dwell in the dim waters of the ocean’s twilight zone. 
Tumblr media
Hyperia has evolved eyes that keep watch on a wide field of view but can only visualize objects nearby. Phronima—commonly known as the barrel amphipod—can see well into the distance, but at the cost of a very narrow field of view. Phronima has solved this problem by evolving a second pair of eyes for wide, but poor, peripheral vision. 
Learn more about the barrel amphipod in our Animals of the Deep gallery.
Images courtesy of MBARI Adjunct Karen Osborn, Smithsonian Institution
602 notes · View notes
dragonnarrative-writes · 3 months ago
Text
Generative AI Is Bad For Your Creative Brain
In the wake of a blog recently announcing that they will no longer be posting fanfiction, I wanted to offer a different perspective than the ones I’ve been seeing in the argument against the use of AI in fandom spaces. Often, I’m seeing the argument that the use of generative AI or Large Language Models (LLMs) makes creative expression more accessible. Certainly, putting a prompt into a chat box and refining the output as desired is faster than writing a 5000 word fanfiction or learning to draw digitally or traditionally. But I would argue that the use of chat bots and generative AI actually limits - and ultimately reduces - one’s ability to enjoy creativity.
Creativity, defined by the Cambridge Advanced Learner’s Dictionary & Thesaurus, is the ability to produce or use original and unusual ideas. By definition, the use of generative AI discourages the brain from engaging with thoughts creatively. ChatGPT, character bots, and other generative AI products have to be trained on already existing text. In order to produce something “usable,” LLMs analyze patterns within text to organize information into what the computer has been trained to identify as “desirable” outputs. These outputs are not always accurate due to the fact that computers don’t “think” the way that human brains do. They don’t create. They take the most common and refined data points and combine them according to predetermined templates to assemble a product. In the case of chat bots that are fed writing samples from authors, the product is not original - it’s a mishmash of the writings that were fed into the system.
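To make the "recombining existing text" point concrete, here is a deliberately tiny bigram model - a toy far simpler than a real LLM, with made-up training text - that "writes" only by chaining together word pairs it has already seen, and can never emit a pairing that wasn't in its training data:

```python
# Toy bigram text generator: illustrates pattern-recombination, not
# how production LLMs actually work internally.
import random
from collections import defaultdict

training_text = (
    "the knight rode into the forest and the knight drew a sword "
    "and the forest grew dark and the sword shone"
)

# Record which words follow each word in the training data.
followers = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def generate(start, length, seed=0):
    """Chain together seen continuations; nothing new is ever invented."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = followers.get(out[-1])
        if not options:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the", 8))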
Dialectical Behavioral Therapy (DBT) is a therapy modality developed by Marsha M. Linehan based on the understanding that growth comes when we accept that we are doing our best and we can work to better ourselves further. Within this modality, a few core concepts are explored, but for this argument I want to focus on Mindfulness and Emotion Regulation. Mindfulness, put simply, is awareness of the information our senses are telling us about the present moment. Emotion regulation is our ability to identify, understand, validate, and control our reaction to the emotions that result from changes in our environment. One of the skills taught within emotion regulation is Building Mastery - putting forth effort into an activity or skill in order to experience the pleasure that comes with seeing the fruits of your labor. These are by no means the only mechanisms of growth or skill development; however, I believe that mindfulness, emotion regulation, and building mastery are a large part of the core of creativity. When someone uses generative AI to imitate fanfiction, roleplay, fanart, etc., the core experience of creative expression is undermined.
Creating engages the body. As a writer who uses pen and paper as well as word processors while drafting, I had to learn how my body best engages with my process. The ideal pen and paper, the fact that I need glasses to work on my computer, the height of the table all factor into how I create. I don’t use audio recordings or transcriptions because that’s not a skill I’ve cultivated, but other authors use those tools as a way to assist their creative process. I can’t speak with any authority to the experience of visual artists, but my understanding is that the feedback and feel of their physical tools, the programs they use, and many other factors are not just part of how they learned their craft, they are essential to their art.
Generative AI invites users to bypass mindfully engaging with the physical act of creating. Part of becoming a person who creates from the vision in one’s head is the physical act of practicing. How did I learn to write? By sitting down and making myself write, over and over, word after word. I had to learn the rhythms of my body, and to listen when pain tells me to stop. I do not consider myself a visual artist - I have not put in the hours to learn to consistently combine line and color and form to show the world the idea in my head.
But I could.
Learning a new skill is possible. But one must be able to regulate one’s unpleasant emotions to be able to get there. The emotion that gets in the way of most people starting their creative journey is anxiety. Instead of a focus on “fear,” I like to define this emotion as “unpleasant anticipation.” In Atlas of the Heart, Brené Brown identifies anxiety as both a trait (a long-term characteristic) and a state (a temporary condition). That is, we can be naturally predisposed to be impacted by anxiety, and experience unpleasant anticipation in response to an event. And the action urge associated with anxiety is to avoid the unpleasant stimulus.
Starting a new project, developing a new skill, or leaning into a creative endeavor can all inspire anxiety. There is an unpleasant anticipation of things not turning out exactly correctly, of being judged negatively, of being unnoticed or even ignored. There is a lot less anxiety to be had in submitting a prompt to a machine than in looking at a blank page and possibly making what could be a mistake. Unfortunately, the more something is avoided, the more anxiety is generated when it comes up again. Using generative AI doesn’t encourage starting a new project and learning a new skill - in fact, it makes the prospect more distressing to the mind, and encourages further avoidance of developing a personal creative process.
One of the best ways to reduce anxiety about a task, according to DBT, is for a person to do that task. Opposite action is a method of reducing the intensity of an emotion by going against its action urge. The action urge of anxiety is to avoid, and so opposite action encourages someone to approach the thing they are anxious about. This doesn’t mean that everyone who has anxiety about creating should make themselves write a 50k word fanfiction as their first project. But in order to reduce anxiety about dealing with a blank page, one must face and engage with a blank page. Even a single sentence fragment, two lines intersecting, an unintentional drop of ink means the page is no longer blank. If those are still difficult to approach, a prompt, tutorial, or guided exercise can be used to reinforce the understanding that a blank page can be changed, slowly but surely, by your own hand.
(As an aside, I would discourage the use of AI prompt generators - these often use prompts that were already created by a real person without credit. Prompt blogs and posts exist right here on tumblr, as well as imagines and headcanons that people often label “free to a good home.” These prompts can also often be specific to fandom, style, mood, etc., if you’re looking for something specific.)
In the current social media and content consumption culture, it’s easy to feel like the first attempt should be a perfect final product. But creating isn’t just about the final product. It’s about the process. Bo Burnham’s Inside is phenomenal, but I think the outtakes are just as important. We didn’t get That Funny Feeling and How the World Works and All Eyes on Me because Bo Burnham woke up and decided to write songs in the same day. We got them because he’s been developing and honing his craft, as well as learning about himself as a person and artist, since he was a teenager. Building mastery in any skill takes time, and it’s often slow.
Slow is an important word, when it comes to creating. The fact that skill takes time to develop and a final piece of art takes time regardless of skill is its own source of anxiety. Compared to @sentientcave, who writes about 2k words per day, I’m very slow. And for all the time it takes me, my writing isn’t perfect - I find typos after posting and sometimes my phrasing is awkward. But my writing is better than it was, and my confidence is much higher. I can sit and write for longer and longer periods, my projects are more diverse, I’m sharing them with people, even before the final edits are done. And I only learned how to do this because I took the time to push through the discomfort of not being as fast or as skilled as I want to be in order to learn what works for me and what doesn’t.
Building mastery - getting better at a skill over time so that you can see your own progress - isn’t just about getting better. It’s about feeling better about your abilities. Confidence, excitement, and pride are important emotions to associate with our own actions. It teaches us that we are capable of making ourselves feel better by engaging with our creativity, a confidence that can be generalized to other activities.
Generative AI doesn’t encourage its users to try new things, to make mistakes, and to see what works. It doesn’t reward new accomplishments to encourage the building of new skills by connecting to old ones. The reward centers of the brain have nothing to respond to that they can associate with the user’s own actions. There is a short-term input-reward pathway, but it’s only associated with using the AI prompter. It’s designed to encourage the user to come back over and over again, not to develop the skill to think and create for themselves.
I don’t know that anyone will change their minds after reading this. It’s imperfect, and I’ve summarized concepts that can take months or years to learn. But I can say that I learned something from the process of writing it. I see some of the flaws, and I can see how my essay writing has changed over the years. This might have been faster to plug into AI as a prompt, but I can see how much more confidence I have in my own voice and opinions. And that’s not something ChatGPT can ever replicate.
151 notes · View notes
kenyatta · 5 months ago
Text
https://www.metamute.org/editorial/articles/californian-ideology
There is an emerging global orthodoxy concerning the relation between society, technology and politics. We have called this orthodoxy `the Californian Ideology' in honour of the state where it originated. By naturalising and giving a technological proof to a libertarian political philosophy, and therefore foreclosing on alternative futures, the Californian Ideologues are able to assert that social and political debates about the future have now become meaningless.

The Californian Ideology is a mix of cybernetics, free market economics, and counter-culture libertarianism and is promulgated by magazines such as WIRED and MONDO 2000 and preached in the books of Stewart Brand, Kevin Kelly and others. The new faith has been embraced by computer nerds, slacker students, 30-something capitalists, hip academics, futurist bureaucrats and even the President of the USA himself. As usual, Europeans have not been slow to copy the latest fashion from America. While a recent EU report recommended adopting the Californian free enterprise model to build the 'infobahn', cutting-edge artists and academics have been championing the 'post-human' philosophy developed by the West Coast's Extropian cult. With no obvious opponents, the global dominance of the Californian ideology appears to be complete.

On superficial reading, the writings of the Californian ideologists are an amusing cocktail of Bay Area cultural wackiness and in-depth analysis of the latest developments in the hi-tech arts, entertainment and media industries. Their politics appear to be impeccably libertarian - they want information technologies to be used to create a new `Jeffersonian democracy' in cyberspace. […] In its certainties, the Californian ideology offers a fatalistic vision of the natural and inevitable triumph of the hi-tech free market.
from "The Californian Ideology" by Richard Barbrook and Andy Cameron, 1 September 1995
201 notes · View notes
fuerst-von-plan1 · 9 months ago
Text
The Role of AI in Efficient Real-Time Data Processing
In today's digital era, real-time data processing plays a crucial role in a wide range of industries, from financial services to healthcare and the Internet of Things (IoT). The enormous volume of data generated continuously requires advanced technologies to extract and analyze relevant information in real time. Artificial…
0 notes
knight-a3 · 4 months ago
Text
Hazbin Masterpost
Heavenbound Masterpost
Vox, the noisy video box
Tumblr media Tumblr media
So Vox may not be my favorite character, but he is probably my favorite redesign. I laugh every time I look at him now. He looks like a weird mix of Spongebob, Kraang(TMNT), and Mr. Electric(Sharkboy and Lavagirl). He absolutely hates it.
Notes under the cut
There's too many twinks in this show. So when I was trying to decide which characters I could change, for body diversity, Vox was an obvious one. He needed more bulk so his body could conceivably support the old TV models. Those things could get heavy. The change also had the side effect of making him shorter, which just worked better proportionately.
Tumblr media
I liked the idea that Vox could never get rid of his original bulky 50s TV, but also wanted him to be able to upgrade. So I decided his true body is the 50s TV, and he adds an upgraded monitor for a head as technology improves. He hates that he's stuck as an old-fashioned TV, so he hides that under his suit. Since the monitor is just an addition, it can be swapped out easily. It can be damaged and he's technically unharmed. But he can't see through his suit without the monitor, unless he wants to use a security camera and direct himself 3rd person style.
I didn't like that basically everyone has sharp teeth. It reduces the impact for characters like Alastor or Rosie. So I've been having the default be just sharp canines. But with Vox being a TV, there are so many possibilities. I gave Vox "regular" teeth, which helps him look more trustworthy. It fits the corrupt businessman vibe. But the appearance can change with his mood too.
Color TV became available in the 50s, so Vox always had color vision. But I think it'd be funny if, early on, he had a tendency to glitch out by going into black and white vision when he gets worked up. He's mostly grown out of that glitch, but he can't seem to shake the static or TV color bars, and developed new ones as he integrated computer and internet tech into himself as well. Now he gets the Blue Screen of Death, system errors, and city wide power surges.
Messing around with his face is so fun. When he's bored or tired a Voxtech logo will bounce around like the DVD logo, or display a screensaver. His face can get too big for the screen when he's excited, or be small when he's feeling embarrassed. I need to put a troll face on him at some point. It may be an old meme, but man, it feels right.
His left eye turns red when it's hypnotic, to reference those blue and red 3D glasses.
Of the three Vees, he is absolutely the most powerful. Val and Vel are the content creators, but Vox is the platform. The other two, while still powerful in their own right, would never have gotten to the level they're at if it weren't for Vox. He controls the mainstream media.
--TV set--
So we've got some interesting implications with how he functions. He's a TV, but he blue screens like a computer, and he shorts out the power grid. I think it's safe to say he is more than just a TV, he's a multimedia entertainment center. That, and TVs are starting to really blend with computers these days. He's mainstream media.
At some point, I realized that a TV set was a "set" because it wasn't just a single device. A television set was a collection of components, which boils down to a radio hooked up and synchronized to a visual display. I bring this up mostly because I am a sucker for one-sided radiostatic. It's so funny to me. Vox is obsessed.
But I'm going to refrain from too much theorizing about their relationship. Alastor is absolutely not interested in romance. Nor a QPR. He's not even interested in friendship. Alastor is too invested in power dynamics to really consider anyone a friend. Mimzy is probably the closest he has to a friend, and even that has manipulative elements on both sides. But I'm supposed to be talking about Vox!
--Human Vox!--
He is not tall, haha. But his proportions are a bit taller than his demon form. I wanted to go for square glasses, but I didn't see many examples of that in the 50s photos I found. Oh well! My goal was a sleazy businessman. He probably had a variety of jobs, but they primarily involved TV. Commercials, PR, interviews, news, game shows, talk shows, screenwriting, etc. Whatever he could do to get more influence. He found himself favoring the business end of things. Making deals and pulling strings. He decided what would go on the air. He's one of those network executive types.
I see lots of people give him heterochromia, but I don't really see a point to that. He hypnotizes people with his left eye, sure, but it's not a different color. It's not disfigured in any way either. Maybe he just had a tendency to wink at people, I dunno.
I think his death involved some sort of severe skull fracture focused around his left eye. Maybe a car accident, maybe he was shot, idk. Maybe seizures were involved. But he was somewhere in his mid 40s to early 50s. I ended up writing 45, but I'm not super committed to that or anything.
For a human name, I see lots of people calling him Vincent and that's sorta grown on me. So I might go with "Vincent Cox".
And because I fell into another research rabbit hole...
--TV evolution--
(below) 50s-60s CRT TV: TV sets were treated as furniture and there could be some very interesting cabinet designs. Color TV was introduced in the 50s, but wasn't quite profitable until the late 60s.
Tumblr media Tumblr media Tumblr media
(below) 70s-80s CRT TV: Color TV became more affordable and commonplace.
Tumblr media Tumblr media Tumblr media
(below) 90s CRT TV
Tumblr media Tumblr media Tumblr media
(below) 2000s CRT to Plasma and LCD TVs: The three display technologies competed, but LCD won out in the end. Plasma and early LCD didn't look substantially different. Plasma was a little bulkier, but was still slimmer than CRT.
Tumblr media Tumblr media Tumblr media
2010s and on: LCD improved with LED backlighting. But then OLED removed the need for backlighting entirely, which mixed the benefits of plasma and LCD. (Didn't bother to find a picture example. It's so close to modern at this point)
--Display technology-- (These overviews are very simplified)
CRT (Cathode Ray Tube)--Used through the 1900s to approx 2010. Monochromatic until color TV developed around the 1950s. Worked via vacuum tubes and an electron gun that lit up the pixels. They were bulky, heavy, and used a whole lot of power. Widely considered obsolete and no longer made. Video games made while these were in use tend to look better on CRT, since the graphics accounted for the image quality.
Flat screens-
PDP (Plasma Display Panel): Used from early 2000s to approx 2015. Used gas cells that light up pixels when electrically charged. Good image quality and good contrast, but expensive, heavy, and used a lot of power. Considered obsolete and no longer made, despite still having a desirable image quality.
Plasma and LCD competed in the 2000s to early 2010s as CRT popularity waned. LCD eventually won out due to weight and overall cost(including market price and energy efficiency).
LCD (Liquid Crystal Display): Introduced for TV around the same time as Plasma. Works via a liquid crystal layer with a backlight. Slim, decent image quality, energy efficient. Viewing angle matters because image colors are warped at wide angles. Cheaper than plasma. There are two main backlighting types:
--CCFL(Cold Cathode Fluorescent Light): Used fluorescent lighting for the backlight. Image quality was decent, but didn't have good contrast. (the blacks were never truly dark because of the backlight)
--LED(Light Emitting Diode): An LCD that uses LEDs instead of CCFL for the backlighting. Better contrast and efficiency than using CCFL.
OLED (Organic LED): Mixes strengths of plasma and LCD. Self-emitting LEDs. No backlight or LCD panel needed, which improves contrast (about as good as plasma was, which is why plasma is basically obsolete now).
--QD-OLED(Quantum Dot- OLED) Adds a layer of Quantum dots to an OLED to improve color gamut. I think. I can't let myself fall too far into this rabbit hole, so I'm not double checking anymore.
(Feb 12, 2025 - updated tags)
157 notes · View notes
kimberly-spirits13 · 1 year ago
Note
I wanted a batboys headcanon, a reader brother who is an android, like Marvel's Vision. What would it be like for them to have a brother who is a robot? Pleaseee notice me
Hellooooo thanks for the requests 😂 I’m in AP Exam season so I’m not on here too much rn but here’s my two cents on the matter
okay so if batbro is like Vision, there’s some sort of power source that is VERY visible and easy to access
They are all super protective of you in that sense since they don’t want you getting hurt *rip vision* *you were ✨almost✨ indestructible*
Bruce or Tim would probably make some sort of mask or armor to protect the energy source
When you were younger or first introduced into the batfam, they’re always running diagnostics and computer programs to see what’s up with you so get used to it
Jason had a can of WD-40 that he threatens you with sometimes to be funny
Bruce is always concerned that you’ll get reprogrammed or hacked by a villain and they’ll have to have some sort of terrible contingency plan
Dick is mostly just trying to help you incorporate into society normally
It helps if you can disguise the robot/ cyborg part but he’s always making sure your social life is an A+
Damian wants to push your limits and see how far you can train and fight
It takes him a while to not see you as a computer tbh but he comes around after you save his butt a few times
Duke models his armor off of your systems and Luke is typically trying to incorporate little pieces of it into Wayne Tech
Stephanie is the type to write on you with expo markers if you have a metal armor body or something
Cass doesn’t treat you as if you weren’t human but treats you like everyone else so she’s super awesome to be around
Babs has definitely sat down with you to go through diagnostics if you’re cool with it and she’s the best for that since she’s not testing you like a machine but trying to understand how you can best live
You’re often called in for League problems and the entire batfam worries about it since they’re protective
Alfred doesn’t bat an eye at anything since he’s used to everything by now
Is of course, very kind and understanding of anything you need or are going through
450 notes · View notes